# Preference Optimization
## Gemma 2 9B It SimPO (princeton-nlp)
- License: MIT
- Description: Gemma 2 9B Instruct model fine-tuned on the gemma2-ultrafeedback-armorm dataset using the SimPO objective for preference optimization.
- Tags: Large Language Model, Transformers
- Downloads: 21.34k · Likes: 164
## Llama 3 Instruct 8B SimPO (princeton-nlp)
- Description: SimPO is a preference optimization method that eliminates the need for a reference reward model, simplifying the traditional RLHF pipeline by directly optimizing language models on preference data.
- Tags: Large Language Model, Transformers
- Downloads: 1,924 · Likes: 58
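To make the SimPO objective concrete, here is a minimal sketch of its loss for a single preference pair. It assumes the length-normalized (average per-token) log-probabilities of the chosen and rejected responses have already been computed; the `beta` and `gamma` values are illustrative defaults, not the models' actual training hyperparameters.

```python
import math

def simpo_loss(avg_logp_chosen: float, avg_logp_rejected: float,
               beta: float = 2.0, gamma: float = 0.5) -> float:
    """SimPO loss for one preference pair.

    The implicit reward is beta times the average per-token log-probability
    of a response, so no reference model is needed. gamma is a target
    reward margin between the chosen and rejected responses.
    """
    margin = beta * (avg_logp_chosen - avg_logp_rejected) - gamma
    # -log(sigmoid(margin)), written in a numerically stable form
    return math.log1p(math.exp(-margin))

# Example: the chosen response is more likely per token than the rejected one,
# so the margin is positive and the loss is small.
loss = simpo_loss(avg_logp_chosen=-1.0, avg_logp_rejected=-2.5)
```

Because the reward is length-normalized, longer responses are not favored simply for accumulating more log-probability mass, which is one of the design choices distinguishing SimPO from DPO.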